1.
PLoS Med ; 20(10): e1004306, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37906614

ABSTRACT

BACKGROUND: Clinical trial registries allow assessment of deviations of published trials from their protocol, which may indicate a considerable risk of bias. However, since entries in many registries can be updated at any time, deviations may go unnoticed. We aimed to assess the frequency of changes to primary outcomes across historical versions of registry entries, and how often such changes would go unnoticed if only deviations between published trial reports and the most recent registry entry were assessed.

METHODS AND FINDINGS: We analyzed the complete history of changes to registry entries for all 1746 randomized controlled trials completed at German university medical centers between 2009 and 2017, with published results up to 2022, that were registered in ClinicalTrials.gov or the German WHO primary registry (German Clinical Trials Register; DRKS). Data were retrieved on 24 January 2022. We assessed deviations between registry entries and publications in a random subsample of 292 trials. We determined changes of primary outcomes (1) between different versions of registry entries at key trial milestones, (2) between the latest registry entry version and the results publication, and (3) changes that occurred after trial start with no discrepancy between the latest registry entry version and the publication (so that detecting them requires assessing the full history of changes). We categorized changes as major if primary outcomes were added, dropped, changed to secondary outcomes, or secondary outcomes were turned into primary outcomes. We also assessed (4) the proportion of publications transparently reporting changes and (5) characteristics associated with changes. Of all 1746 trials, 23% (n = 393) had a primary outcome change between trial start and the latest registry entry version, with 8% (n = 142) being major changes. Primary outcomes in publications differed from the latest registry entry version in 41% of trials (120 of the 292 sampled trials; 95% confidence interval (CI) [35%, 47%]), with major changes in 18% (54 of 292; 95% CI [14%, 23%]). Overall, 55% of trials (161 of 292; 95% CI [49%, 61%]) had primary outcome changes at any timepoint over the course of the trial, with 23% (67 of 292; 95% CI [18%, 28%]) having major changes. Changes only within registry records, with no apparent discrepancy between the latest registry entry version and the publication, were observed in 14% of trials (41 of 292; 95% CI [10%, 19%]), with 4% (13 of 292; 95% CI [2%, 7%]) being major changes. Only 1% of trials with a change reported it in their publication (2 of 161 trials; 95% CI [0%, 4%]). An exploratory logistic regression analysis indicated that trials were less likely to have a discrepant registry entry if they were registered more recently (odds ratio (OR) 0.74; 95% CI [0.69, 0.80]; p < 0.001), were not registered on ClinicalTrials.gov (OR 0.41; 95% CI [0.23, 0.70]; p = 0.002), or were not industry-sponsored (OR 0.29; 95% CI [0.21, 0.41]; p < 0.001). Key limitations include some degree of subjectivity in the categorization of outcome changes and the inclusion of a single geographic region.

CONCLUSIONS: In this study, we observed that changes to primary outcomes occurred in 55% of trials, with 23% of trials having major changes. These changes were rarely transparently reported in the results publication and were often not visible in the latest registry entry version. More transparency is needed, supported by deeper analysis of registry entries to make these changes more easily recognizable.

Protocol registration: Open Science Framework (https://osf.io/t3qva; amendment at https://osf.io/qtd2b).
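For readers unfamiliar with the kind of exploratory odds-ratio analysis described above, the following is a minimal sketch in Python; it is not the authors' code, and the toy data, effect sizes, and variable names are all invented for illustration:

```python
# Illustrative exploratory logistic regression yielding odds ratios with
# 95% CIs, as in the study above. Synthetic data; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 292  # size matching the sampled subset, purely for flavor
df = pd.DataFrame({
    "reg_year": rng.integers(2009, 2018, n),  # registration year
    "on_ctgov": rng.integers(0, 2, n),        # registered on ClinicalTrials.gov
    "industry": rng.integers(0, 2, n),        # industry-sponsored
})
# Invented data-generating process for the binary outcome
# "discrepant registry entry".
logit = -0.5 - 0.3 * (df["reg_year"] - 2009) + 0.9 * df["on_ctgov"] + 1.2 * df["industry"]
df["discrepant"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["reg_year", "on_ctgov", "industry"]])
fit = sm.Logit(df["discrepant"], X).fit(disp=False)

# Exponentiated coefficients are odds ratios; exponentiating the CI bounds
# gives the 95% CI on the OR scale.
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))
```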


Subject(s)
Universities , Humans , Bias , Registries , Odds Ratio
2.
J Clin Transl Sci ; 7(1): e166, 2023.
Article in English | MEDLINE | ID: mdl-37588679

ABSTRACT

Objectives: To assess the extent to which the clinical trial registration and reporting policies of 25 of the world's largest public and philanthropic medical research funders meet the best-practice benchmarks stipulated by the 2017 WHO Joint Statement, and to document changes in the policies and monitoring systems of 19 European funders over the past year.

Design, Setting, and Participants: Cross-sectional study, based on assessments of each funder's publicly available documentation plus validation of results by funders. Our cohort includes 25 of the largest medical research funders in Europe, Oceania, South Asia, and Canada.

Interventions: Scoring all 25 funders using an 11-item assessment tool based on WHO best-practice benchmarks, grouped into three primary categories: trial registries, academic publication, and monitoring, plus validation of results by funders.

Main outcome measures: How many of the 11 WHO best-practice items each of the 25 funders has put into place, and changes in the performance of 19 previously assessed funders over the preceding year.

Results: The 25 funders we assessed had put into place an average of 5/11 (49%) WHO best practices. Only 6/25 funders (24%) took the principal investigator's past reporting record into account during grant application reviews. Funders' performance varied widely, from 0/11 to 11/11 WHO best practices adopted. Of the 19 funders for which 2021 baseline data were available, 10/19 (53%) had strengthened their policies over the preceding year.

Conclusions: Most medical research funders need to do more to curb research waste and publication bias by strengthening their clinical trial policies.
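The scoring approach is simple checklist aggregation; a minimal sketch (funder names and item counts invented, not the study's data) shows how per-funder scores and the cohort average translate into the percentages reported above:

```python
# Illustrative aggregation of an 11-item best-practice checklist across
# funders. Names and scores are invented for demonstration.
from statistics import mean

N_ITEMS = 11  # WHO best-practice items in the assessment tool

# Hypothetical per-funder counts of adopted items (0-11).
scores = {"Funder A": 11, "Funder B": 7, "Funder C": 3, "Funder D": 0}

avg = mean(scores.values())
print(f"Average: {avg:.1f}/{N_ITEMS} items ({avg / N_ITEMS:.0%})")

for funder, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"  {funder}: {s}/{N_ITEMS} ({s / N_ITEMS:.0%})")
```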

3.
BMJ Open ; 13(4): e069553, 2023 04 18.
Article in English | MEDLINE | ID: mdl-37072362

ABSTRACT

OBJECTIVE: Prospective registration has been widely implemented and accepted as a best practice in clinical research, but retrospective registration is still commonly found. We assessed to what extent retrospective registration is reported transparently in journal publications and investigated factors associated with transparent reporting.

DESIGN: We used a dataset of trials registered in ClinicalTrials.gov or Deutsches Register Klinischer Studien, with a German university medical centre as the lead centre, completed in 2009-2017, and with a corresponding peer-reviewed results publication. We extracted all registration statements from the results publications of retrospectively registered trials and assessed whether they mention or justify the retrospective registration. We analysed associations of retrospective registration, and of the reporting thereof, with registration number reporting, International Committee of Medical Journal Editors (ICMJE) membership or ICMJE-following, and industry sponsorship, using the χ2 or Fisher exact test.

RESULTS: In the dataset of 1927 trials with a corresponding results publication, 956 (53.7%) were retrospectively registered. Of those, 2.2% (21) explicitly report the retrospective registration in the abstract and 3.5% (33) in the full text. In 2.1% (20) of publications, the authors provide an explanation for the retrospective registration in the full text. Registration numbers were significantly underreported in abstracts of retrospectively registered trials compared with prospectively registered trials. Publications in ICMJE member journals did not have statistically significantly higher rates of either prospective registration or disclosure of retrospective registration, and publications in journals claiming to follow ICMJE recommendations showed statistically significantly lower rates than non-ICMJE-following journals. Industry sponsorship of trials was significantly associated with higher rates of prospective registration, but not with transparent registration reporting.

CONCLUSIONS: Contrary to ICMJE guidance, retrospective registration is disclosed and explained in only a small number of retrospectively registered studies. Disclosing the retrospective nature of the registration would require only a brief statement in the manuscript and could easily be implemented by journals.
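The contingency-table tests named above are standard; the following is a minimal sketch of such an analysis in Python with scipy, where the 2x2 counts are invented for demonstration and do not come from the study:

```python
# Illustrative 2x2 association test: chi-squared, falling back to Fisher's
# exact test when expected cell counts are small. Counts are invented.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: prospectively vs retrospectively registered trials.
# Columns: registration number reported in the abstract (yes / no).
table = np.array([[400, 571],
                  [250, 706]])

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Fisher's exact test is preferred with small expected cell counts.
    odds_ratio, p = fisher_exact(table)
    print(f"Fisher exact test: OR = {odds_ratio:.2f}, p = {p:.4g}")
else:
    print(f"Chi-squared test: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```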


Subject(s)
Disclosure , Peer Review , Humans , Cross-Sectional Studies , Prospective Studies , Retrospective Studies , Registries
4.
Br J Clin Pharmacol ; 89(1): 340-350, 2023 01.
Article in English | MEDLINE | ID: mdl-35986927

ABSTRACT

AIMS: Research ethics committees and regulatory agencies assess whether the benefits of a proposed early-stage clinical trial outweigh the risks based on preclinical studies reported in investigator's brochures (IBs). Recent studies have indicated that the reporting of preclinical evidence presented in IBs does not enable proper risk-benefit assessment. We interviewed different stakeholders (regulators, research ethics committee members, preclinical and clinical researchers, ethicists, and metaresearchers) about their views on measures to increase the completeness and robustness of preclinical evidence reporting in IBs.

METHODS: This study was preregistered (https://osf.io/nvzwy/). We used purposive sampling and invited stakeholders to participate in an online semistructured interview between March and June 2021. Themes were derived using inductive content analysis. We used a strengths, weaknesses, opportunities, and threats (SWOT) matrix to categorize our findings.

RESULTS: Twenty-seven international stakeholders participated. The interviewees pointed to several strengths and opportunities for improving completeness and robustness, mainly more transparent and systematic justifications for the included studies. However, they also mentioned weaknesses and threats that could undermine efforts to enable a more thorough assessment: the interviewees stressed that current review practices are sufficient to ensure the safe conduct of first-in-human trials, and they feared that changes to the IB structure or review process could overburden stakeholders and slow drug development.

CONCLUSION: In principle, more robust decision-making processes align with the interests of all stakeholders and with many current initiatives to increase the translatability of preclinical research and to limit uninformative or ill-justified trials early in the development process. Further research should investigate measures that could be implemented to benefit all stakeholders.


Subject(s)
Pamphlets , Humans , Ethics Committees, Research , Research Design , Risk Assessment
5.
Diagn Progn Res ; 6(1): 4, 2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35321760

ABSTRACT

BACKGROUND: With rising cost pressures on health care systems, machine-learning (ML)-based algorithms are increasingly used to predict health care costs. Despite their potential advantages, the successful implementation of these methods could be undermined by biases introduced in the design, conduct, or analysis of studies seeking to develop and/or validate ML models. The utility of such models may also be negatively affected by poor reporting of these studies. In this systematic review, we aim to evaluate the reporting quality, methodological characteristics, and risk of bias of ML-based prediction models for individual-level health care spending.

METHODS: We will systematically search PubMed and Embase to identify studies developing, updating, or validating ML-based models to predict an individual's health care spending for any medical condition, over any time period, and in any setting. We will exclude prediction models of aggregate-level health care spending, models used to infer causality, models using radiomics or speech parameters, models of non-clinically validated predictors (e.g., genomics), and cost-effectiveness analyses that do not predict individual-level health care spending. We will extract data based on the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies (CHARMS), previously published research, and relevant recommendations. We will assess the adherence of ML-based studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and examine the inclusion of transparency and reproducibility indicators (e.g., statements on data sharing). To assess the risk of bias, we will apply the Prediction model Risk Of Bias Assessment Tool (PROBAST). Findings will be stratified by study design, ML methods used, population characteristics, and medical field.

DISCUSSION: Our systematic review will appraise the quality, reporting, and risk of bias of ML-based models for individualized health care cost prediction. It will provide an overview of the available models and give insights into the strengths and limitations of using ML methods for the prediction of health spending.
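To make the class of model under review concrete, here is a minimal sketch of an individual-level cost prediction model in Python with scikit-learn; the predictors, data-generating process, and model choice are invented for illustration and do not represent any study included in the review:

```python
# Illustrative individual-level health care cost prediction model of the
# kind this review appraises. Synthetic data; not a model from any study.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical predictors: age, chronic condition count, prior-year spending.
X = np.column_stack([
    rng.integers(18, 90, n),
    rng.poisson(1.5, n),
    rng.gamma(2.0, 1500.0, n),
])
# Synthetic, right-skewed annual spending loosely tied to the predictors.
y = 500 + 30 * X[:, 0] + 800 * X[:, 1] + 0.6 * X[:, 2] + rng.gamma(2.0, 400.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.0f}")
print(f"R^2: {r2_score(y_test, pred):.2f}")
```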
